rust: rename Guard to GuardMut #520
Conversation
This is in preparation for the introduction of `Guard`, which won't implement `DerefMut`. It will be used in sequence locks where the protected data cannot be directly modified because it can be accessed concurrently by readers. This is a pure refactor with no functional changes intended.

Signed-off-by: Wedson Almeida Filho <wedsonaf@google.com>
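To make the distinction concrete, here is a minimal sketch (not the kernel's actual code; the field names and lifetimes are assumptions) of a `GuardMut` that implements both `Deref` and `DerefMut`, next to a `Guard` that only implements `Deref`. The point of the rename is that the write-side guard of a sequence lock can only safely be the second kind:

```rust
use std::ops::{Deref, DerefMut};

// Hypothetical sketch: `GuardMut` grants read/write access to the protected
// data; `Guard` derefs only immutably, so a seqlock writer can never hand
// out `&mut T` while readers may be racing with it.
struct GuardMut<'a, T> {
    data: &'a mut T,
}

struct Guard<'a, T> {
    data: &'a T,
}

impl<'a, T> Deref for GuardMut<'a, T> {
    type Target = T;
    fn deref(&self) -> &T {
        self.data
    }
}

impl<'a, T> DerefMut for GuardMut<'a, T> {
    fn deref_mut(&mut self) -> &mut T {
        self.data
    }
}

impl<'a, T> Deref for Guard<'a, T> {
    type Target = T;
    fn deref(&self) -> &T {
        self.data
    }
}

fn main() {
    let mut x = 1u32;
    let mut g = GuardMut { data: &mut x };
    *g += 1; // DerefMut: mutation allowed through the guard

    let y = 5u32;
    let r = Guard { data: &y };
    // *r += 1;  // would not compile: Guard has no DerefMut
    println!("{} {}", *g, *r); // prints "2 5"
}
```

Attempting `*r += 1` through `Guard` is rejected at compile time, which is exactly the property the sequence-lock write side needs.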
Does this mean we'll use […]
This is a different dimension: the guard for the write-side critical section of a sequential lock only exposes immutable references to the underlying data. So if we were to use […]

I'm not really a big fan of the […]
Seqlocks probably require their own abstraction, e.g. https://docs.rs/seqlock/. (I would probably change the read method to take the signature […])
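The comment above is truncated, so the exact signature being proposed is an assumption, but a common alternative to a `read` that copies `T` out is a closure-based one that lends `&T` to the caller and retries internally. A minimal single-threaded sketch (sequence counter and retry loop elided, all names hypothetical):

```rust
// Hypothetical sketch of a closure-based seqlock read signature: instead of
// returning a copy of `T`, `read` passes `&T` to a closure and would retry
// the closure if a writer intervened.
struct SeqLock<T> {
    data: T, // the real thing would also carry a sequence counter
}

impl<T> SeqLock<T> {
    fn read<R>(&self, f: impl Fn(&T) -> R) -> R {
        // Real version: loop { s = seq; r = f(&data); if seq == s { return r } }
        f(&self.data)
    }
}

fn main() {
    let lock = SeqLock { data: (1u32, 2u32) };
    let sum = lock.read(|t| t.0 + t.1);
    assert_eq!(sum, 3);
}
```

This shape avoids requiring `T: Copy` and lets the caller extract only the fields it needs, at the cost of the closure possibly running more than once.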
The seqlock in the kernel behaves differently: we don't necessarily want to (and in some cases can't) copy a whole `T` out of the lock. Anyway, the read side is definitely different enough that we don't want to try to merge it into the existing abstractions, but the write side is very much conventional -- ideally it should work as a lock for […]
I am not sure that exposing only immutable references on the seqlock write side is the correct abstraction. I think it's better to have a clearer picture of the seqlock API design before renaming this.
Unlike an rwlock, for a seqlock (and RCU) the read side and write side are not mutually exclusive, so the uniqueness of references on the write side cannot be guaranteed.
I agree. Having some POC code would definitely help the discussion. @wedsonaf I think you already have a draft implementation?
I wonder whether we can add "mutability" to an interior-mutable type? Something like the following:

```rust
use core::sync::atomic::{AtomicUsize, Ordering};

pub struct NonRaceUsize {
    val: AtomicUsize,
}

impl NonRaceUsize {
    pub fn store(&mut self, v: usize) {
        self.val.store(v, Ordering::Relaxed);
    }

    pub fn load(&self) -> usize {
        self.val.load(Ordering::Relaxed)
    }
}
```

Will it still be UB if we have (and access) both […]
In LLVM there is an "unordered" ordering that is more relaxed than "relaxed" ordering; it is basically a non-atomic access that is defined not to be UB when a data race happens. So it's okay if one side has mutable access and the other only performs "unordered" read access. There is some use of it in compiler builtins, but since it's an LLVM thing and not formally defined in Rust, we definitely want to avoid it.
It would still be UB under the Rust memory model.
I had this idea: https://godbolt.org/z/63qsTsave There is a lot of scaffolding, but the usage is really just the last few lines. The key idea is to use […]
Or formally define it? ;-) Rust definitely doesn't want to rely on a particular feature of the backend, but it can ask the backends to provide certain guarantees, right?

The basic idea is that data races are fine as long as the value of a raced read is never "used". Seqlock is one example: reads are considered correct only when the sequence counts are stable (between begin and retry), which means we are only racing with a write-side critical section in terms of memory effects. That said, a proper definition of "used" may be challenging; there are cases in the Linux kernel where the value of a raced read is used: 1) in diagnostic code, which doesn't care that much about correctness, and 2) as a fast check for whether a fastpath can be taken, e.g. […]

BTW, the implementation of https://docs.rs/seqlock/ also has UB, right?
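The read-side protocol described above ("reads are considered correct only when the sequence counts are stable") can be sketched as follows. This is a minimal single-threaded illustration of the assumed shape, not the kernel's seqcount API; the type and method names are made up:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Sketch of the seqcount read-side retry protocol: the value of a possibly
// racy read is only "used" after the sequence count is observed even (no
// writer in progress) and unchanged across the read.
struct SeqCount {
    seq: AtomicUsize,
}

impl SeqCount {
    fn read_begin(&self) -> usize {
        loop {
            let s = self.seq.load(Ordering::Acquire);
            if s & 1 == 0 {
                return s; // even: no writer in the critical section
            }
            // odd: a writer is active, spin until it finishes
        }
    }

    fn read_retry(&self, start: usize) -> bool {
        // true if a writer began (or finished) since read_begin
        self.seq.load(Ordering::Acquire) != start
    }
}

fn main() {
    let sc = SeqCount { seq: AtomicUsize::new(0) };
    let data = 42u32; // stands in for the protected data
    let val = loop {
        let s = sc.read_begin();
        let v = data; // the potentially racy read
        if !sc.read_retry(s) {
            break v; // the value is only used once the count is stable
        }
    };
    assert_eq!(val, 42);
}
```

In the racy case the loop discards `v` and retries, which is precisely why the debate above is about whether merely *loading* the value (without using it) is already UB under the Rust memory model.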
Technically yes. Practically, it can't be exploited by the compiler. The volatile read is essentially used to mimic an unordered load; it would be UB-free if an unordered load were used (at least according to LLVM semantics).
Let me check whether I understand your point: it works because the current Rust memory model allows the coexistence of […]
Yes. Two pointers alias if the memory regions they point to overlap. Since a ZST pointer points to a memory region of 0 bytes, it cannot alias with any pointer by definition.
Yes.
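The ZST aliasing argument above can be demonstrated in a few lines. This is a hypothetical illustration of the idea (the `WritePermit` name is made up, not from the godbolt link): exclusive access is threaded through a zero-sized marker rather than through the data itself, so holding `&mut` to the marker overlaps no bytes of the concurrently readable data:

```rust
// Sketch of the ZST point: a zero-sized marker occupies no bytes, so a
// reference to it cannot alias anything, yet `&mut` to it can still act as
// a type-level "write permission" token.
struct WritePermit; // zero-sized type

fn main() {
    assert_eq!(std::mem::size_of::<WritePermit>(), 0);

    // The API would thread exclusive access to the *permit*, not the data;
    // the data itself can still be read concurrently through `&`.
    let mut permit = WritePermit;
    let _exclusive: &mut WritePermit = &mut permit;
}
```

Because the marker is zero-sized, the usual aliasing rules for `&mut` are trivially satisfied no matter how the protected data is accessed, which is what makes the scheme plausible as a seqlock write-side gate.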
Here's the commit for the seqlock: f0e3f12 |
I like the idea of using a ZST, but how is one supposed to use this? Please don't tell me we need a proc macro to parse type definitions and generate all this code. |
I opened #537 so that we can investigate other designs for seqlocks. For now I'll submit this one and we can replace it later if we find a better one. |
If the argument check during an array bind fails, the bind_ops are freed twice, as seen below. Fix this by setting bind_ops to NULL after freeing.

==================================================================
BUG: KASAN: double-free in xe_vm_bind_ioctl+0x1b2/0x21f0 [xe]
Free of addr ffff88813bb9b800 by task xe_vm/14198

CPU: 5 UID: 0 PID: 14198 Comm: xe_vm Not tainted 6.16.0-xe-eudebug-cmanszew+ #520 PREEMPT(full)
Hardware name: Intel Corporation Alder Lake Client Platform/AlderLake-P DDR5 RVP, BIOS ADLPFWI1.R00.2411.A02.2110081023 10/08/2021
Call Trace:
 <TASK>
 dump_stack_lvl+0x82/0xd0
 print_report+0xcb/0x610
 ? __virt_addr_valid+0x19a/0x300
 ? xe_vm_bind_ioctl+0x1b2/0x21f0 [xe]
 kasan_report_invalid_free+0xc8/0xf0
 ? xe_vm_bind_ioctl+0x1b2/0x21f0 [xe]
 ? xe_vm_bind_ioctl+0x1b2/0x21f0 [xe]
 check_slab_allocation+0x102/0x130
 kfree+0x10d/0x440
 ? should_fail_ex+0x57/0x2f0
 ? xe_vm_bind_ioctl+0x1b2/0x21f0 [xe]
 xe_vm_bind_ioctl+0x1b2/0x21f0 [xe]
 ? __pfx_xe_vm_bind_ioctl+0x10/0x10 [xe]
 ? __lock_acquire+0xab9/0x27f0
 ? lock_acquire+0x165/0x300
 ? drm_dev_enter+0x53/0xe0 [drm]
 ? find_held_lock+0x2b/0x80
 ? drm_dev_exit+0x30/0x50 [drm]
 ? drm_ioctl_kernel+0x128/0x1c0 [drm]
 drm_ioctl_kernel+0x128/0x1c0 [drm]
 ? __pfx_xe_vm_bind_ioctl+0x10/0x10 [xe]
 ? find_held_lock+0x2b/0x80
 ? __pfx_drm_ioctl_kernel+0x10/0x10 [drm]
 ? should_fail_ex+0x57/0x2f0
 ? __pfx_xe_vm_bind_ioctl+0x10/0x10 [xe]
 drm_ioctl+0x352/0x620 [drm]
 ? __pfx_drm_ioctl+0x10/0x10 [drm]
 ? __pfx_rpm_resume+0x10/0x10
 ? do_raw_spin_lock+0x11a/0x1b0
 ? find_held_lock+0x2b/0x80
 ? __pm_runtime_resume+0x61/0xc0
 ? rcu_is_watching+0x20/0x50
 ? trace_irq_enable.constprop.0+0xac/0xe0
 xe_drm_ioctl+0x91/0xc0 [xe]
 __x64_sys_ioctl+0xb2/0x100
 ? rcu_is_watching+0x20/0x50
 do_syscall_64+0x68/0x2e0
 entry_SYSCALL_64_after_hwframe+0x76/0x7e
RIP: 0033:0x7fa9acb24ded

Fixes: b43e864 ("drm/xe/uapi: Add DRM_XE_VM_BIND_FLAG_CPU_ADDR_MIRROR")
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Signed-off-by: Christoph Manszewski <christoph.manszewski@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Link: https://lore.kernel.org/r/20250813101231.196632-2-christoph.manszewski@intel.com
(cherry picked from commit a01b704)
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>